413 research outputs found
Peer-to-Peer Networks: A Mechanism Design Approach
In this paper, we use a mechanism design approach to find the optimal file-sharing mechanism in a peer-to-peer network. This mechanism improves upon existing incentive schemes. In particular, we show that the peer-approved scheme is never optimal and the service-quality scheme is optimal only under certain circumstances. Moreover, we find that the optimal mechanism can be implemented by a mixture of peer-approved and service-quality schemes.
Keywords: peer-to-peer networks, mechanism design
Planar Object Tracking in the Wild: A Benchmark
Planar object tracking is an actively studied problem in vision-based robotic
applications. While several benchmarks have been constructed for evaluating
state-of-the-art algorithms, there is a lack of video sequences captured in the
wild rather than in constrained laboratory environments. In this paper, we
present a carefully designed planar object tracking benchmark containing 210
videos of 30 planar objects sampled in the natural environment. In particular,
for each object, we shoot seven videos involving various challenging factors,
namely scale change, rotation, perspective distortion, motion blur, occlusion,
out-of-view, and unconstrained. The ground truth is carefully annotated
semi-manually to ensure quality. Moreover, eleven state-of-the-art
algorithms are evaluated on the benchmark using two evaluation metrics, with
detailed analysis provided for the evaluation results. We expect the proposed
benchmark to benefit future studies on planar object tracking.
Comment: Accepted by ICRA 201
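Planar trackers are typically scored by the alignment error between predicted and ground-truth corner positions of the target, together with a success rate at a pixel threshold. The abstract does not specify the benchmark's two metrics, so the sketch below is a plausible illustration rather than the paper's definition; the function names and the 5-pixel threshold are assumptions:

```python
import numpy as np

def alignment_error(pred_corners, gt_corners):
    """Mean Euclidean distance (in pixels) between predicted and
    ground-truth corner positions of the planar target."""
    pred = np.asarray(pred_corners, dtype=float)
    gt = np.asarray(gt_corners, dtype=float)
    return float(np.mean(np.linalg.norm(pred - gt, axis=1)))

def success_rate(errors, threshold=5.0):
    """Fraction of frames whose alignment error falls below a threshold."""
    return float(np.mean(np.asarray(errors, dtype=float) < threshold))

# Toy frame: every corner of the prediction is off by (1, 1) pixels.
gt = [[0, 0], [100, 0], [100, 100], [0, 100]]
pred = [[1, 1], [101, 1], [101, 101], [1, 101]]
err = alignment_error(pred, gt)  # sqrt(2) for this uniform offset
```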
Prior Knowledge Guided Unsupervised Domain Adaptation
The absence of labels in the target domain makes Unsupervised Domain Adaptation
(UDA) an attractive technique in many real-world applications, though it also
brings great challenges as model adaptation becomes harder without labeled
target data. In this paper, we address this issue by seeking compensation from
target domain prior knowledge, which is often (partially) available in
practice, e.g., from human expertise. This leads to a novel yet practical
setting where in addition to the training data, some prior knowledge about the
target class distribution is available. We term this setting
Knowledge-guided Unsupervised Domain Adaptation (KUDA). In particular, we
consider two specific types of prior knowledge about the class distribution in
the target domain: Unary Bound that describes the lower and upper bounds of
individual class probabilities, and Binary Relationship that describes the
relations between two class probabilities. We propose a general rectification
module that uses such prior knowledge to refine model generated pseudo labels.
The module is formulated as a Zero-One Programming problem derived from the
prior knowledge and a smooth regularizer. It can be easily plugged into
self-training based UDA methods, and we combine it with two state-of-the-art
methods, SHOT and DINE. Empirical results on four benchmarks confirm that the
rectification module clearly improves the quality of pseudo labels, which in
turn benefits the self-training stage. With the guidance from prior knowledge,
the performance of both methods is substantially boosted. We expect our work
to inspire further investigations in integrating prior knowledge in UDA. Code
is available at https://github.com/tsun/KUDA.
Comment: To appear in ECCV 202
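The Unary Bound rectification can be illustrated with a greedy simplification (not the paper's Zero-One Programming formulation): flip the least-confident pseudo labels of over-represented classes to classes that still have room under their upper bounds. Only upper bounds are enforced here, and `rectify_pseudo_labels` and its details are assumptions made for illustration:

```python
import numpy as np

def rectify_pseudo_labels(probs, upper):
    """Greedily move the least-confident samples out of classes whose
    pseudo-label share exceeds its upper bound. Only upper bounds are
    enforced; lower bounds would be handled symmetrically."""
    probs = np.asarray(probs, dtype=float)
    n, k = probs.shape
    labels = probs.argmax(axis=1)
    counts = np.bincount(labels, minlength=k)
    for c in range(k):
        while counts[c] > upper[c] * n:
            idx = np.where(labels == c)[0]
            victim = idx[np.argmin(probs[idx, c])]   # least-confident member
            for alt in np.argsort(-probs[victim]):   # next-best class with room
                if alt != c and counts[alt] < upper[alt] * n:
                    labels[victim] = alt
                    counts[c] -= 1
                    counts[alt] += 1
                    break
            else:
                return labels  # no class has room; give up gracefully
    return labels
```

With `upper = [0.5, 1.0]`, at most half of the samples may keep pseudo label 0, so the two least-confident class-0 predictions are reassigned.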
Four Color Observations of 2501 Lohja
Photometric studies of asteroid 2501 Lohja were made between 2014 June 24 and 25 using the Southeastern Association for Research in Astronomy (SARA) Kitt Peak telescope with Bessell B, V, R, and I filters. We obtained a synodic period of 3.81 ± 0.01 h, which is consistent with previously reported values.
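Synodic periods like the one reported are typically found by folding the lightcurve at trial periods and minimizing the scatter within phase bins (phase dispersion minimization). A toy sketch on synthetic data, not the SARA observations:

```python
import numpy as np

def phase_dispersion(times, mags, period, nbins=10):
    """Variance of the lightcurve within phase bins after folding at a
    trial period; the true period minimizes this dispersion."""
    phase = (times / period) % 1.0
    total = 0.0
    for b in range(nbins):
        m = mags[(phase >= b / nbins) & (phase < (b + 1) / nbins)]
        if m.size > 1:
            total += m.var() * m.size
    return total

# Synthetic single-peaked lightcurve with a 3.81 h period (illustrative only;
# real asteroid lightcurves are typically double-peaked).
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 48.0, 500)                                # hours
m = 0.3 * np.sin(2 * np.pi * t / 3.81) + rng.normal(0, 0.02, t.size)
periods = np.linspace(3.0, 5.0, 2001)
best = periods[np.argmin([phase_dispersion(t, m, p) for p in periods])]
```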
Mask and Restore: Blind Backdoor Defense at Test Time with Masked Autoencoder
Deep neural networks are vulnerable to backdoor attacks, where an adversary
maliciously manipulates the model behavior through overlaying images with
special triggers. Existing backdoor defense methods often require access to a
few validation samples and the model parameters, which is impractical in many
real-world applications, e.g., when the model is provided as a cloud service.
In this paper, we address the practical task of blind backdoor defense at test
time, in particular for black-box models. The true label of every test image
needs to be recovered on the fly from the hard label predictions of a
suspicious model. The heuristic trigger search in image space, however, is not
scalable to complex triggers or high image resolutions. We circumvent this
barrier by leveraging generic image generation models, and propose a framework
of Blind Defense with Masked AutoEncoder (BDMAE). It uses the image structural
similarity and label consistency between the test image and MAE restorations to
detect possible triggers. The detection result is refined by considering the
topology of triggers. We obtain a purified test image from restorations for
making the final prediction. Our approach is blind to the model architecture,
trigger patterns, and image benignity. Extensive experiments on multiple datasets with
different backdoor attacks validate its effectiveness and generalizability.
Code is available at https://github.com/tsun/BDMAE
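The label-consistency part of the defense can be caricatured as follows: mask one patch at a time, restore the image, and flag patches whose removal flips the black-box model's hard-label prediction. This is a simplified reading of the abstract; `restore_fn` (the masked autoencoder) and `predict_fn` (the suspicious model) are hypothetical callables, and the real BDMAE also uses structural similarity and trigger topology:

```python
import numpy as np

def trigger_score_map(image, restore_fn, predict_fn, patch=8):
    """Score each patch by whether masking it out and restoring the image
    flips the model's hard-label prediction; high-scoring patches are
    trigger candidates."""
    h, w = image.shape[:2]
    base_label = predict_fn(image)
    score = np.zeros((h // patch, w // patch))
    for i in range(h // patch):
        for j in range(w // patch):
            masked = image.copy()
            masked[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch] = 0.0
            restored = restore_fn(masked)
            if predict_fn(restored) != base_label:
                score[i, j] = 1.0
    return score
```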
Backdoor Cleansing with Unlabeled Data
Due to the increasing computational demand of Deep Neural Networks (DNNs),
companies and organizations have begun to outsource the training process.
However, the externally trained DNNs can potentially be backdoor attacked. It
is crucial to defend against such attacks, i.e., to postprocess a suspicious
model so that its backdoor behavior is mitigated while its normal prediction
power on clean inputs remains uncompromised. To remove the abnormal backdoor
behavior, existing methods mostly rely on additional labeled clean samples.
However, such a requirement may be unrealistic, as the training data are often
unavailable to end users. In this paper, we investigate the possibility of
circumventing this barrier. We propose a novel defense method that does not
require training labels. Through a carefully designed layer-wise weight
re-initialization and knowledge distillation, our method can effectively
cleanse backdoor behaviors of a suspicious network with negligible compromise
in its normal behavior. In experiments, we show that our method, trained
without labels, is on par with state-of-the-art defense methods trained using
labels. We also observe promising defense results even on out-of-distribution
data. This makes our method very practical.
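The two ingredients named above, layer-wise re-initialization and label-free knowledge distillation, can be sketched in isolation. The distillation loss needs only the teacher's soft predictions on unlabeled data; the re-initialization shown is a crude stand-in for the paper's carefully designed scheme, and both function names are assumptions:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Soft cross-entropy between temperature-scaled teacher and student
    predictions; no ground-truth labels are required."""
    p = softmax(np.asarray(teacher_logits, dtype=float), T)
    q = softmax(np.asarray(student_logits, dtype=float), T)
    return float(-(p * np.log(q + 1e-12)).sum(axis=-1).mean() * T * T)

def reinit_last_layers(weights, k, rng=None):
    """Re-initialize the last k weight matrices of a network (a crude
    stand-in for layer-wise weight re-initialization)."""
    rng = np.random.default_rng(rng)
    out = [w.copy() for w in weights]
    for w in out[-k:]:
        w[...] = rng.normal(0.0, 0.02, w.shape)
    return out
```

A student whose logits match the teacher's incurs a lower distillation loss than one that disagrees, which is what drives the cleansing toward the teacher's clean behavior.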